Chapter 4 - Tail inequalities
Igor Gonopolskiy & Nimrod Milo (BGU), November 10, 2009
Outline
1. Definitions
2. Chernoff Bound (upper deviation, lower deviation)
3. The wiring problem
Experiment types

Bernoulli trials. Let X_1, ..., X_n be a set of n independent Bernoulli trials such that, for 1 ≤ i ≤ n,

    X_i = 1 with probability p, and X_i = 0 otherwise.

Let X = Σ_{i=1}^n X_i; then X is said to have the binomial distribution.

Poisson trials. The general case of the above distribution is when each trial X_i has its own success probability p_i. Let X_1, ..., X_n be a set of n independent trials such that, for 1 ≤ i ≤ n,

    X_i = 1 with probability p_i, and X_i = 0 otherwise.
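The definition can be checked empirically. A minimal sketch in Python (the probabilities p_i below are made-up values, not from the text): simulate independent Poisson trials and compare the average of X = Σ X_i to µ = Σ p_i.

```python
import random

random.seed(0)
p = [0.1, 0.5, 0.9, 0.3]   # hypothetical success probabilities p_i
mu = sum(p)                # E[X] = sum of the p_i = 1.8

trials = 100_000
total = 0
for _ in range(trials):
    # one experiment: n independent Poisson trials, X = number of successes
    X = sum(1 for p_i in p if random.random() < p_i)
    total += X

print(total / trials)      # empirical mean of X, close to mu
```

By linearity of expectation the empirical mean converges to µ regardless of whether the p_i are equal (Bernoulli case) or not (Poisson case).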
Type of questions we would like to answer

We consider two questions regarding the deviation of X from its expectation µ = Σ_{i=1}^n p_i, for a real number δ > 0:

- What is the probability that X exceeds (1 + δ)µ? This is useful in analyzing an algorithm, showing that the chance it fails to achieve a solution is small.
- How large must δ be in order that the tail probability is less than a prescribed value ε? This is useful when designing an algorithm.
Chernoff Bound

Theorem (4.1). Let X_1, ..., X_n be independent Poisson trials such that, for 1 ≤ i ≤ n, Pr[X_i = 1] = p_i. Then for X = Σ_{i=1}^n X_i, µ = E[X] = Σ_{i=1}^n p_i, and any δ > 0,

    Pr[X > (1 + δ)µ] < [ e^δ / (1 + δ)^(1+δ) ]^µ
Chernoff Bound - Proof

Applying X ↦ exp(tX) to both sides (for any t > 0), and then applying the Markov inequality Pr[Y ≥ a] ≤ E[Y]/a to the right-hand side:

    Pr[X > (1 + δ)µ] = Pr[exp(tX) > exp(t(1 + δ)µ)] < E[exp(tX)] / exp(t(1 + δ)µ)
Since E[exp(tX)] = E[exp(t Σ_{i=1}^n X_i)] = E[Π_{i=1}^n exp(tX_i)], and because the X_i are independent, the exp(tX_i) are also independent, so the expectation of the product is the product of expectations:

    Pr[X > (1 + δ)µ] < Π_{i=1}^n E[exp(tX_i)] / exp(t(1 + δ)µ)
The random variable e^{tX_i} assumes the value e^t with probability p_i and the value 1 with probability 1 − p_i, so E[exp(tX_i)] = p_i e^t + 1 − p_i:

    Pr[X > (1 + δ)µ] < Π_{i=1}^n [p_i e^t + 1 − p_i] / exp(t(1 + δ)µ)
Rewriting each factor as 1 + p_i(e^t − 1):

    Π_{i=1}^n [p_i e^t + 1 − p_i] / exp(t(1 + δ)µ) = Π_{i=1}^n [1 + p_i(e^t − 1)] / exp(t(1 + δ)µ)
We use the inequality 1 + x < e^x with x = p_i(e^t − 1):

    Pr[X > (1 + δ)µ] < Π_{i=1}^n [1 + p_i(e^t − 1)] / exp(t(1 + δ)µ) < Π_{i=1}^n exp(p_i(e^t − 1)) / exp(t(1 + δ)µ)
Collecting the product into a single exponential and using Σ_{i=1}^n p_i = µ:

    Π_{i=1}^n exp(p_i(e^t − 1)) / exp(t(1 + δ)µ) = exp(Σ_{i=1}^n p_i(e^t − 1)) / exp(t(1 + δ)µ) = exp((e^t − 1)µ) / exp(t(1 + δ)µ)
So far we have proved that, for any t > 0,

    Pr[X > (1 + δ)µ] < exp((e^t − 1)µ) / exp(t(1 + δ)µ)

To get the best value of t we differentiate the last expression with respect to t and set the derivative to zero. The minimizer is t = ln(1 + δ), which is positive for δ > 0. Substituting this value we obtain the theorem:

    Pr[X > (1 + δ)µ] < [ e^δ / (1 + δ)^(1+δ) ]^µ

Definition 4.1. We define an upper tail bound function for the sum of Poisson trials:

    F⁺(µ, δ) = [ e^δ / (1 + δ)^(1+δ) ]^µ
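A quick numerical sanity check of Theorem 4.1 (a sketch; the fair-coin parameters n = 200, p = 1/2, δ = 0.2 are arbitrary illustrative choices, not from the text): the empirical upper-tail probability should fall well below F⁺(µ, δ).

```python
import math
import random

def f_plus(mu, delta):
    """Upper tail bound F+(mu, delta) = (e^delta / (1+delta)^(1+delta))^mu."""
    return (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu

random.seed(1)
n, p, delta = 200, 0.5, 0.2
mu = n * p                          # 100
threshold = (1 + delta) * mu        # 120

trials = 20_000
hits = sum(1 for _ in range(trials)
           if sum(random.random() < p for _ in range(n)) > threshold)

print(hits / trials, "<", f_plus(mu, delta))
```

The bound is not tight here (the true tail is far smaller than F⁺(100, 0.2) ≈ 0.153), which is typical: Chernoff bounds trade tightness for generality and ease of use.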
Example

The Aardvarks win each game with probability 1/3. Assuming that all games are independent, we want to bound the chance that they have a winning season (n games in a season, so more than n/2 wins).

Solution. Let X_i be 1 if the Aardvarks win the i-th game and 0 otherwise, and let Y_n = Σ_{i=1}^n X_i. Here µ = n/3, and (1 + δ)µ = n/2 gives δ = 1/2, so applying Theorem 4.1 to Y_n:

    Pr[Y_n > n/2] < F⁺(n/3, 1/2) < (0.965)^n
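The constant 0.965 in the example can be reproduced directly from Definition 4.1: F⁺(n/3, 1/2) = c^n where c = (e^{1/2} / (3/2)^{3/2})^{1/3}. A one-line check:

```python
import math

# per-game factor c, with F+(n/3, 1/2) = c ** n
c = (math.exp(0.5) / 1.5 ** 1.5) ** (1 / 3)
print(round(c, 4))   # 0.9646, so F+(n/3, 1/2) < (0.965) ** n
```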
Chernoff bound - below expectation

We now want to consider the deviation of X below its expectation µ.

Theorem (4.6). Let X_1, ..., X_n be independent Poisson trials such that, for 1 ≤ i ≤ n, Pr[X_i = 1] = p_i. Then for X = Σ_{i=1}^n X_i, µ = E[X] = Σ_{i=1}^n p_i, and any 0 < δ ≤ 1,

    Pr[X < (1 − δ)µ] < e^(−µδ²/2)
Chernoff Bound - Proof

We will not prove this version of the Chernoff bound, because the proof is similar to the upper tail with some small differences.

Definition 4.2. We define a lower tail bound function for the sum of Poisson trials:

    F⁻(µ, δ) = e^(−µδ²/2)
Example

The Aardvarks hire a new coach, and now the chance to win each game is 3/4. What is the probability that they suffer a losing season?

Solution. Let X_i be 1 if the Aardvarks win the i-th game and 0 otherwise, and let Y_n = Σ_{i=1}^n X_i. Here µ = 3n/4, and (1 − δ)µ = n/2 gives δ = 1/3, so applying Theorem 4.6 to Y_n:

    Pr[Y_n < n/2] < F⁻(3n/4, 1/3) < (0.9592)^n
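Again the constant follows directly from Definition 4.2: F⁻(3n/4, 1/3) = (e^{−(3/4)(1/3)²/2})^n = e^{−n/24}. A one-line check:

```python
import math

# per-game factor: F-(3n/4, 1/3) = exp(-(3/4) * (1/3)**2 / 2) ** n = exp(-1/24) ** n
c = math.exp(-(3 / 4) * (1 / 3) ** 2 / 2)
print(round(c, 5))   # 0.95919, so F-(3n/4, 1/3) < (0.9592) ** n
```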
Answering the second type of question

Until now we have answered the first kind of question. To answer the second type (how large must δ be in order that the tail probability is less than a prescribed value ε?) we make the following definitions.

Definition 4.7. For any positive µ and ε, let Δ⁺(µ, ε) be the value of δ that satisfies

    F⁺(µ, Δ⁺(µ, ε)) = ε

In other words, a deviation of δ = Δ⁺(µ, ε) from µ suffices to keep Pr[X > (1 + δ)µ] below ε.

Definition 4.8. Δ⁻(µ, ε) is defined in the same way, using F⁻.
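Definition 4.7 has no closed form, but since F⁺(µ, δ) is strictly decreasing in δ (for δ > 0), Δ⁺(µ, ε) can be computed numerically by bisection. A sketch; the values µ = 100, ε = 0.01 are illustrative choices:

```python
import math

def f_plus(mu, delta):
    """Upper tail bound F+(mu, delta) = (e^delta / (1+delta)^(1+delta))^mu."""
    return (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu

def delta_plus(mu, eps, tol=1e-9):
    """Bisection for the delta solving F+(mu, delta) = eps.

    F+(mu, 0) = 1 and F+ is strictly decreasing in delta, so a root
    exists for any 0 < eps < 1.
    """
    lo, hi = 0.0, 1.0
    while f_plus(mu, hi) > eps:   # grow the bracket until the bound drops below eps
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f_plus(mu, mid) > eps:
            lo = mid
        else:
            hi = mid
    return hi                     # f_plus(mu, hi) <= eps by the loop invariant

d = delta_plus(100, 0.01)
print(round(d, 4), f_plus(100, d) <= 0.01)
```

For example, with µ = 100 and ε = 0.01 the required relative deviation is roughly δ ≈ 0.32.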
The wiring problem - definition

The input of the problem is:

- a √n × √n array of gates (a total of n gates) arranged at regularly spaced points in the plane;
- a set of nets, each consisting of two gates to be connected by a wire. All wires run above the gates in Manhattan form (parallel to the axes).

Figure: a 9-gate example (3 × 3), with nets <2, 7>, <8, 3>, <4, 9>, <7, 3>.
The wiring problem - general solution

A solution S is an assignment of global routes to all the given nets such that each wire has at most one turn. We would also like to limit the maximum number of wires that pass through any boundary b between two adjacent gates in the array; denote this limit by w.

Definition. For a boundary b between two gates in the array, let w_S(b) be the number of wires that pass through b in a solution S. Let w_S = max_b w_S(b) be the maximum number of wires through any boundary in the solution S.

Observation. If we can minimize the value of w_S then we can decide on the feasibility of the problem.
Example

For the 3 × 3 example with nets <2, 7>, <8, 3>, <4, 9>, <7, 3>, one routing S gives w_S(b') = 1 for one boundary b' and w_S(b'') = 2 for another boundary b''. (Figure: a 9-gate example, 3 × 3, showing the routed wires and the two boundaries.)
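For an instance this small, the best one-turn routing can be found by brute force over all 2⁴ route choices. A sketch for the 3 × 3 example above (gates numbered 1..9 row by row; boundary names like ('V', r, c) are our own encoding, not from the text):

```python
from collections import Counter
from itertools import product

pos = {g: divmod(g - 1, 3) for g in range(1, 10)}   # gate -> (row, col) in the 3x3 array
nets = [(2, 7), (8, 3), (4, 9), (7, 3)]

def crossed(a, b, horizontal_first):
    """Boundaries crossed by a one-turn Manhattan route from gate a to gate b."""
    (r1, c1), (r2, c2) = pos[a], pos[b]
    # horizontal-first: horizontal leg on row r1, vertical leg on column c2;
    # vertical-first: vertical leg on column c1, horizontal leg on row r2.
    row, col = (r1, c2) if horizontal_first else (r2, c1)
    bs = set()
    for c in range(min(c1, c2), max(c1, c2)):   # vertical boundaries on the horizontal leg
        bs.add(('V', row, c))
    for r in range(min(r1, r2), max(r1, r2)):   # horizontal boundaries on the vertical leg
        bs.add(('H', r, col))
    return bs

best = min(
    max(Counter(b for net, hf in zip(nets, choice)
                for b in crossed(*net, hf)).values())
    for choice in product([True, False], repeat=len(nets))
)
print(best)   # minimum achievable w_S over all one-turn routings of this instance
```

Enumeration shows that no routing of these four nets achieves w_S = 1, so the optimum for this instance is w_S = 2, matching the congested boundary in the figure.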
Zero-one integer programming

In general, a zero-one integer linear programming problem can be defined as optimizing a linear function, e.g.

    w = c_1 x_1 + c_2 x_2

subject to constraints such as

    a_11 x_1 + a_12 x_2 ≤ b_1
    a_21 x_1 + a_22 x_2 ≤ b_2

and so on, where we also demand that all the variables x_i are integers restricted to {0, 1}.
Casting the wiring problem to integer programming

Routes: for a net i we use two variables x_i0 and x_i1 to indicate which of the two routes is used (horizontal-first vs. vertical-first). Thus x_i0 is 1 if the wire goes horizontally first starting from the left end-point of net i, and 0 otherwise.

Boundaries: for each boundary b we define, for y ∈ {0, 1},

    T_by = { i : net i passes through b if x_iy = 1 }
Applying the IP formulation to our problem

With the last definitions our IP can be expressed as: minimize w subject to

    Σ_{i ∈ T_b0} x_i0 + Σ_{i ∈ T_b1} x_i1 ≤ w     (for every boundary b)
    x_i0 + x_i1 = 1                               (for every net i)
    x_i0, x_i1 ∈ {0, 1}                           (for every net i)

Question: so what is the problem?

Answer: the IP problem is NP-hard.
The solution - relaxation plus randomization

Relaxation. We want to allow a less restrictive set of routes:

- allow x_i0 and x_i1 to take real values between 0 and 1;
- let x̂_i0 and x̂_i1, for 1 ≤ i ≤ n, be the solution given by the resulting linear program;
- denote by ŵ the optimum of the relaxation and by w_0 the optimum of the IP. Clearly ŵ ≤ w_0. Notice that the x̂ values can be fractional, so the LP solution may not be feasible for the original problem.

Rounding. We use randomized rounding: independently for each i, set x_i0 to 1 and x_i1 to 0 with probability x̂_i0; otherwise set x_i0 to 0 and x_i1 to 1.
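The rounding step is a one-liner per net. A minimal sketch, assuming a hypothetical fractional LP solution x̂ (the numbers below are made up; in practice they come from an LP solver):

```python
import random

random.seed(3)
# hypothetical fractional LP solution: x_hat[i] = (x_hat_i0, x_hat_i1), summing to 1
x_hat = [(0.7, 0.3), (0.5, 0.5), (0.2, 0.8), (1.0, 0.0)]

def round_routes(x_hat):
    """Independently set x_i0 = 1 with probability x_hat_i0; otherwise x_i1 = 1."""
    return [(1, 0) if random.random() < x0 else (0, 1) for x0, _ in x_hat]

routes = round_routes(x_hat)
print(routes)   # each net gets exactly one of its two routes

# sanity check: E[x_i0] = x_hat_i0, so averaging many roundings recovers x_hat_i0
samples = [round_routes(x_hat) for _ in range(10_000)]
avg0 = sum(s[0][0] for s in samples) / len(samples)
print(avg0)     # close to 0.7 for net 0
```

The key property used in the analysis below is exactly this: after rounding, each x_i0 is a Poisson trial whose expectation equals the fractional value x̂_i0.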
Correctness of the algorithm

Theorem (4.8). Let ε be a real number such that 0 < ε < 1. Then with probability at least 1 − ε the randomized algorithm provides a solution S satisfying

    w_S ≤ ŵ(1 + Δ⁺(ŵ, ε/2n)) ≤ w_0(1 + Δ⁺(w_0, ε/2n))

And now in words: with probability at least 1 − ε, no boundary in the array carries more than w_0(1 + Δ⁺(w_0, ε/2n)) wires, where w_0 is the optimal (IP) value.
Proof

We will show, for any boundary b, that the probability that w_S(b) > ŵ(1 + Δ⁺(ŵ, ε/2n)) is at most ε/2n. Since there are fewer than 2n boundaries, summing these probabilities (a union bound) yields the claim for w_S.

Consider a boundary b. Since the solution of the LP satisfies the constraints, we have

    Σ_{i ∈ T_b0} x̂_i0 + Σ_{i ∈ T_b1} x̂_i1 ≤ ŵ                          (1)

The number of wires passing through b in a solution S is

    w_S(b) = Σ_{i ∈ T_b0} x_i0 + Σ_{i ∈ T_b1} x_i1                       (2)

Because the rounded x_i0, x_i1 are independent Poisson trials with success probabilities x̂_i0, x̂_i1, taking expectations in (2) gives

    E[w_S(b)] = Σ_{i ∈ T_b0} E[x_i0] + Σ_{i ∈ T_b1} E[x_i1]              (3)

Notice that E[x_i0] = x̂_i0 (and the same for x_i1), so by (1)

    E[w_S(b)] = Σ_{i ∈ T_b0} x̂_i0 + Σ_{i ∈ T_b1} x̂_i1 ≤ ŵ              (4)

By the definition of Δ⁺(ŵ, ε/2n) we get

    Pr[w_S(b) > ŵ(1 + Δ⁺(ŵ, ε/2n))] ≤ ε/2n
Performance

Question: how good is the bound Δ⁺(ŵ, ε/2n)?

Answer: it depends on the value of w_0. We can show that if w_0 = n^c, where c is a positive constant, then we find a solution whose additive term is vanishingly small (relative to w_0) as n grows. On the other hand, if w_0 is a small constant (e.g. 20), then w_S = O(log n / log log n). Thus the algorithm is likely to perform well provided w_0 is not too small.
The Ellipsoid Algorithm John E. Mitchell Department of Mathematical Sciences RPI, Troy, NY 12180 USA 9 February 2018 Mitchell The Ellipsoid Algorithm 1 / 28 Introduction Outline 1 Introduction 2 Assumptions
More informationMAT 135B Midterm 1 Solutions
MAT 35B Midterm Solutions Last Name (PRINT): First Name (PRINT): Student ID #: Section: Instructions:. Do not open your test until you are told to begin. 2. Use a pen to print your name in the spaces above.
More informationRandomized Complexity Classes; RP
Randomized Complexity Classes; RP Let N be a polynomial-time precise NTM that runs in time p(n) and has 2 nondeterministic choices at each step. N is a polynomial Monte Carlo Turing machine for a language
More informationLecture 5: January 30
CS71 Randomness & Computation Spring 018 Instructor: Alistair Sinclair Lecture 5: January 30 Disclaimer: These notes have not been subjected to the usual scrutiny accorded to formal publications. They
More informationprinceton univ. F 13 cos 521: Advanced Algorithm Design Lecture 3: Large deviations bounds and applications Lecturer: Sanjeev Arora
prnceton unv. F 13 cos 521: Advanced Algorthm Desgn Lecture 3: Large devatons bounds and applcatons Lecturer: Sanjeev Arora Scrbe: Today s topc s devaton bounds: what s the probablty that a random varable
More informationJanson s Inequality and Poisson Heuristic
Janson s Inequality and Poisson Heuristic Dinesh K CS11M019 IIT Madras April 30, 2012 Dinesh (IITM) Janson s Inequality April 30, 2012 1 / 11 Outline 1 Motivation Dinesh (IITM) Janson s Inequality April
More informationSolution: By Markov inequality: P (X > 100) 0.8. By Chebyshev s inequality: P (X > 100) P ( X 80 > 20) 100/20 2 = The second estimate is better.
MA 485-1E, Probability (Dr Chernov) Final Exam Wed, Dec 12, 2001 Student s name Be sure to show all your work. Each problem is 4 points. Full credit will be given for 9 problems (36 points). You are welcome
More informationNetwork Flows. 6. Lagrangian Relaxation. Programming. Fall 2010 Instructor: Dr. Masoud Yaghini
In the name of God Network Flows 6. Lagrangian Relaxation 6.3 Lagrangian Relaxation and Integer Programming Fall 2010 Instructor: Dr. Masoud Yaghini Integer Programming Outline Branch-and-Bound Technique
More informationMathematical Statistics 1 Math A 6330
Mathematical Statistics 1 Math A 6330 Chapter 3 Common Families of Distributions Mohamed I. Riffi Department of Mathematics Islamic University of Gaza September 28, 2015 Outline 1 Subjects of Lecture 04
More informationExpected Value 7/7/2006
Expected Value 7/7/2006 Definition Let X be a numerically-valued discrete random variable with sample space Ω and distribution function m(x). The expected value E(X) is defined by E(X) = x Ω x m(x), provided
More informationHoeffding, Chernoff, Bennet, and Bernstein Bounds
Stat 928: Statistical Learning Theory Lecture: 6 Hoeffding, Chernoff, Bennet, Bernstein Bounds Instructor: Sham Kakade 1 Hoeffding s Bound We say X is a sub-gaussian rom variable if it has quadratically
More informationDiscrete Mathematics for CS Spring 2007 Luca Trevisan Lecture 20
CS 70 Discrete Mathematics for CS Spring 2007 Luca Trevisan Lecture 20 Today we shall discuss a measure of how close a random variable tends to be to its expectation. But first we need to see how to compute
More informationThis exam contains 13 pages (including this cover page) and 10 questions. A Formulae sheet is provided with the exam.
Probability and Statistics FS 2017 Session Exam 22.08.2017 Time Limit: 180 Minutes Name: Student ID: This exam contains 13 pages (including this cover page) and 10 questions. A Formulae sheet is provided
More informationCSCI8980 Algorithmic Techniques for Big Data September 12, Lecture 2
CSCI8980 Algorithmic Techniques for Big Data September, 03 Dr. Barna Saha Lecture Scribe: Matt Nohelty Overview We continue our discussion on data streaming models where streams of elements are coming
More information1 A Lower Bound on Sample Complexity
COS 511: Theoretical Machine Learning Lecturer: Rob Schapire Lecture #7 Scribe: Chee Wei Tan February 25, 2008 1 A Lower Bound on Sample Complexity In the last lecture, we stopped at the lower bound on
More informationRandomized Algorithms. Zhou Jun
Randomized Algorithms Zhou Jun 1 Content 13.1 Contention Resolution 13.2 Global Minimum Cut 13.3 *Random Variables and Expectation 13.4 Randomized Approximation Algorithm for MAX 3- SAT 13.6 Hashing 13.7
More informationX = X X n, + X 2
CS 70 Discrete Mathematics for CS Fall 2003 Wagner Lecture 22 Variance Question: At each time step, I flip a fair coin. If it comes up Heads, I walk one step to the right; if it comes up Tails, I walk
More informationISyE 6739 Test 1 Solutions Summer 2015
1 NAME ISyE 6739 Test 1 Solutions Summer 2015 This test is 100 minutes long. You are allowed one cheat sheet. 1. (50 points) Short-Answer Questions (a) What is any subset of the sample space called? Solution:
More informationInduced subgraphs with many repeated degrees
Induced subgraphs with many repeated degrees Yair Caro Raphael Yuster arxiv:1811.071v1 [math.co] 17 Nov 018 Abstract Erdős, Fajtlowicz and Staton asked for the least integer f(k such that every graph with
More information1 Gambler s Ruin Problem
1 Gambler s Ruin Problem Consider a gambler who starts with an initial fortune of $1 and then on each successive gamble either wins $1 or loses $1 independent of the past with probabilities p and q = 1
More informationProbability Theory Overview and Analysis of Randomized Algorithms
Probability Theory Overview and Analysis of Randomized Algorithms Analysis of Algorithms Prepared by John Reif, Ph.D. Probability Theory Topics a) Random Variables: Binomial and Geometric b) Useful Probabilistic
More informationSimons Symposium on Approximation Algorithms, February 26, 2014
Solving Optimization Problems with Diseconomies of Scale via Decoupling Konstantin Makarychev Microsoft Research Maxim Sviridenko Yahoo Labs Simons Symposium on Approximation Algorithms, February 26, 2014
More informationEssentials on the Analysis of Randomized Algorithms
Essentials on the Analysis of Randomized Algorithms Dimitris Diochnos Feb 0, 2009 Abstract These notes were written with Monte Carlo algorithms primarily in mind. Topics covered are basic (discrete) random
More informationLecture 8: Column Generation
Lecture 8: Column Generation (3 units) Outline Cutting stock problem Classical IP formulation Set covering formulation Column generation A dual perspective Vehicle routing problem 1 / 33 Cutting stock
More informationSUR LE CALCUL DES PROBABILITÉS
MÉMOIRE SUR LE CALCUL DES PROBABILITÉS M. le MARQUIS DE CONDORCET Histoire de l Académie des Sciences des Paris, 78 Parts & 2, pp. 707-728 FIRST PART Reflections on the general rule which prescribes to
More informationCS 5014: Research Methods in Computer Science. Bernoulli Distribution. Binomial Distribution. Poisson Distribution. Clifford A. Shaffer.
Department of Computer Science Virginia Tech Blacksburg, Virginia Copyright c 2015 by Clifford A. Shaffer Computer Science Title page Computer Science Clifford A. Shaffer Fall 2015 Clifford A. Shaffer
More information1. Let X be a random variable with probability density function. 1 x < f(x) = 0 otherwise
Name M36K Final. Let X be a random variable with probability density function { /x x < f(x = 0 otherwise Compute the following. You can leave your answers in integral form. (a ( points Find F X (t = P
More informationLecture 6,7 (Sept 27 and 29, 2011 ): Bin Packing, MAX-SAT
,7 CMPUT 675: Approximation Algorithms Fall 2011 Lecture 6,7 (Sept 27 and 29, 2011 ): Bin Pacing, MAX-SAT Lecturer: Mohammad R. Salavatipour Scribe: Weitian Tong 6.1 Bin Pacing Problem Recall the bin pacing
More informationConvergence of Random Processes
Convergence of Random Processes DS GA 1002 Probability and Statistics for Data Science http://www.cims.nyu.edu/~cfgranda/pages/dsga1002_fall17 Carlos Fernandez-Granda Aim Define convergence for random
More informationWeek 12-13: Discrete Probability
Week 12-13: Discrete Probability November 21, 2018 1 Probability Space There are many problems about chances or possibilities, called probability in mathematics. When we roll two dice there are possible
More informationA Probability Primer. A random walk down a probabilistic path leading to some stochastic thoughts on chance events and uncertain outcomes.
A Probability Primer A random walk down a probabilistic path leading to some stochastic thoughts on chance events and uncertain outcomes. Are you holding all the cards?? Random Events A random event, E,
More informationA Linear Round Lower Bound for Lovasz-Schrijver SDP Relaxations of Vertex Cover
A Linear Round Lower Bound for Lovasz-Schrijver SDP Relaxations of Vertex Cover Grant Schoenebeck Luca Trevisan Madhur Tulsiani Abstract We study semidefinite programming relaxations of Vertex Cover arising
More informationLecture 2 Binomial and Poisson Probability Distributions
Binomial Probability Distribution Lecture 2 Binomial and Poisson Probability Distributions Consider a situation where there are only two possible outcomes (a Bernoulli trial) Example: flipping a coin James
More informationDiscrete Mathematics for CS Spring 2006 Vazirani Lecture 22
CS 70 Discrete Mathematics for CS Spring 2006 Vazirani Lecture 22 Random Variables and Expectation Question: The homeworks of 20 students are collected in, randomly shuffled and returned to the students.
More informationMATH Notebook 5 Fall 2018/2019
MATH442601 2 Notebook 5 Fall 2018/2019 prepared by Professor Jenny Baglivo c Copyright 2004-2019 by Jenny A. Baglivo. All Rights Reserved. 5 MATH442601 2 Notebook 5 3 5.1 Sequences of IID Random Variables.............................
More informationCS70: Jean Walrand: Lecture 19.
CS70: Jean Walrand: Lecture 19. Random Variables: Expectation 1. Random Variables: Brief Review 2. Expectation 3. Important Distributions Random Variables: Definitions Definition A random variable, X,
More informationQuestion Points Score Total: 137
Math 447 Test 1 SOLUTIONS Fall 2015 No books, no notes, only SOA-approved calculators. true/false or fill-in-the-blank question. You must show work, unless the question is a Name: Section: Question Points
More information(b). What is an expression for the exact value of P(X = 4)? 2. (a). Suppose that the moment generating function for X is M (t) = 2et +1 3
Math 511 Exam #2 Show All Work 1. A package of 200 seeds contains 40 that are defective and will not grow (the rest are fine). Suppose that you choose a sample of 10 seeds from the box without replacement.
More informationStat 100a, Introduction to Probability.
Stat 100a, Introduction to Probability. Outline for the day: 1. Geometric random variables. 2. Negative binomial random variables. 3. Moment generating functions. 4. Poisson random variables. 5. Continuous
More informationLecture 4: Two-point Sampling, Coupon Collector s problem
Randomized Algorithms Lecture 4: Two-point Sampling, Coupon Collector s problem Sotiris Nikoletseas Associate Professor CEID - ETY Course 2013-2014 Sotiris Nikoletseas, Associate Professor Randomized Algorithms
More informationAndrew/CS ID: Midterm Solutions, Fall 2006
Name: Andrew/CS ID: 15-780 Midterm Solutions, Fall 2006 November 15, 2006 Place your name and your andrew/cs email address on the front page. The exam is open-book, open-notes, no electronics other than
More informationSTAT 516 Midterm Exam 2 Friday, March 7, 2008
STAT 516 Midterm Exam 2 Friday, March 7, 2008 Name Purdue student ID (10 digits) 1. The testing booklet contains 8 questions. 2. Permitted Texas Instruments calculators: BA-35 BA II Plus BA II Plus Professional
More informationExample continued. Math 425 Intro to Probability Lecture 37. Example continued. Example
continued : Coin tossing Math 425 Intro to Probability Lecture 37 Kenneth Harris kaharri@umich.edu Department of Mathematics University of Michigan April 8, 2009 Consider a Bernoulli trials process with
More information- Well-characterized problems, min-max relations, approximate certificates. - LP problems in the standard form, primal and dual linear programs
LP-Duality ( Approximation Algorithms by V. Vazirani, Chapter 12) - Well-characterized problems, min-max relations, approximate certificates - LP problems in the standard form, primal and dual linear programs
More informationProbability Revealing Samples
Krzysztof Onak IBM Research Xiaorui Sun Microsoft Research Abstract In the most popular distribution testing and parameter estimation model, one can obtain information about an underlying distribution
More informationCMPUT 675: Approximation Algorithms Fall 2014
CMPUT 675: Approximation Algorithms Fall 204 Lecture 25 (Nov 3 & 5): Group Steiner Tree Lecturer: Zachary Friggstad Scribe: Zachary Friggstad 25. Group Steiner Tree In this problem, we are given a graph
More informationMATH 19B FINAL EXAM PROBABILITY REVIEW PROBLEMS SPRING, 2010
MATH 9B FINAL EXAM PROBABILITY REVIEW PROBLEMS SPRING, 00 This handout is meant to provide a collection of exercises that use the material from the probability and statistics portion of the course The
More informationExpectation of Random Variables
1 / 19 Expectation of Random Variables Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay February 13, 2015 2 / 19 Expectation of Discrete
More informationNotes for Math 324, Part 17
126 Notes for Math 324, Part 17 Chapter 17 Common discrete distributions 17.1 Binomial Consider an experiment consisting by a series of trials. The only possible outcomes of the trials are success and
More informationRandomized Computation
Randomized Computation Slides based on S.Aurora, B.Barak. Complexity Theory: A Modern Approach. Ahto Buldas Ahto.Buldas@ut.ee We do not assume anything about the distribution of the instances of the problem
More informationBandits, Experts, and Games
Bandits, Experts, and Games CMSC 858G Fall 2016 University of Maryland Intro to Probability* Alex Slivkins Microsoft Research NYC * Many of the slides adopted from Ron Jin and Mohammad Hajiaghayi Outline
More informationApplications of on-line prediction. in telecommunication problems
Applications of on-line prediction in telecommunication problems Gábor Lugosi Pompeu Fabra University, Barcelona based on joint work with András György and Tamás Linder 1 Outline On-line prediction; Some
More informationDistributed Distance-Bounded Network Design Through Distributed Convex Programming
Distributed Distance-Bounded Network Design Through Distributed Convex Programming OPODIS 2017 Michael Dinitz, Yasamin Nazari Johns Hopkins University December 18, 2017 Distance Bounded Network Design
More informationTo Catch a Falling Robber
To Catch a Falling Robber William B. Kinnersley, Pawe l Pra lat,and Douglas B. West arxiv:1406.8v3 [math.co] 3 Feb 016 Abstract We consider a Cops-and-Robber game played on the subsets of an n-set. The
More informationEECS 126 Probability and Random Processes University of California, Berkeley: Fall 2014 Kannan Ramchandran November 13, 2014.
EECS 126 Probability and Random Processes University of California, Berkeley: Fall 2014 Kannan Ramchandran November 13, 2014 Midterm Exam 2 Last name First name SID Rules. DO NOT open the exam until instructed
More informationIntroduction to Probability
LECTURE NOTES Course 6.041-6.431 M.I.T. FALL 2000 Introduction to Probability Dimitri P. Bertsekas and John N. Tsitsiklis Professors of Electrical Engineering and Computer Science Massachusetts Institute
More informationAnswers to selected exercises
Answers to selected exercises A First Course in Stochastic Models, Henk C. Tijms 1.1 ( ) 1.2 (a) Let waiting time if passengers already arrived,. Then,, (b) { (c) Long-run fraction for is (d) Let waiting
More informationWeek 9 The Central Limit Theorem and Estimation Concepts
Week 9 and Estimation Concepts Week 9 and Estimation Concepts Week 9 Objectives 1 The Law of Large Numbers and the concept of consistency of averages are introduced. The condition of existence of the population
More informationCOS 341: Discrete Mathematics
COS 341: Discrete Mathematics Final Exam Fall 2006 Print your name General directions: This exam is due on Monday, January 22 at 4:30pm. Late exams will not be accepted. Exams must be submitted in hard
More informationLecture 2 Sep 5, 2017
CS 388R: Randomized Algorithms Fall 2017 Lecture 2 Sep 5, 2017 Prof. Eric Price Scribe: V. Orestis Papadigenopoulos and Patrick Rall NOTE: THESE NOTES HAVE NOT BEEN EDITED OR CHECKED FOR CORRECTNESS 1
More information