CS271 Randomness & Computation    Spring 2018
Instructor: Alistair Sinclair
Lecture 14: March 1

Disclaimer: These notes have not been subjected to the usual scrutiny accorded to formal publications. They may be distributed outside this class only with the permission of the Instructor.

So far we have talked about the expectation of random variables, and moved on to the variance or the second moment. However, neither of these methods is strong enough to produce the tight bounds needed for some applications. We have already stated that if all you are given is the variance of an r.v., then Chebyshev's Inequality is essentially the best you can do. Thus, more information is needed to achieve substantially tighter bounds. We begin with the case in which the r.v. is the sum of a sequence of independent r.v.'s; this case arises very frequently in applications.

14.1  Chernoff/Hoeffding Bounds

These methods are usually referred to as Chernoff bounds, but should probably be referred to as Hoeffding bounds. Chernoff established the bounds for the domain of independent coin flips [C52], but it was Hoeffding who extended them to the general case [H63]. The method of proof used here is due to Bernstein.

Let X_1, ..., X_n be i.i.d. random variables with E[X_i] = μ_i and Var[X_i] = σ_i², with all μ_i and all σ_i equal. Let X = Σ_{i=1}^n X_i. What can we say about the distribution of X?

For illustration, suppose each X_i is a flip of the same (possibly biased) coin. Identifying Heads with 1 and Tails with 0, we are interested in the distribution of the total number of heads, X, in n flips. Let

    μ = E[X] = Σ_{i=1}^n μ_i        by linearity of expectation, and
    σ² = Var[X] = Σ_{i=1}^n σ_i²    by linearity of the variance of independent random variables.

The Central Limit Theorem states that as n → ∞, (X − μ)/σ approaches a standard normal distribution N(0, 1). Thus, as n → ∞, for any fixed β > 0 we have

    Pr[|X − μ| > βσ]  →  (2/√(2π)) ∫_β^∞ e^{−t²/2} dt  ≤  (2/(√(2π) β)) e^{−β²/2}.

Note that the probability of a β-deviation from the mean (in units of the standard deviation σ) decays exponentially with β. Unfortunately, there are two deficiencies in this bound for typical combinatorial and computational applications:

- The above result is asymptotic only, and says nothing about the rate of convergence (i.e., the behavior for finite n).
- The above result applies only to deviations on the order of the standard deviation, while we would like to be able to talk about arbitrary deviations.
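To get a feel for how the asymptotic estimate behaves at finite n, the following small Python sketch compares the empirical two-sided deviation probability for n fair coin flips against the Gaussian tail 2 Pr[N(0,1) > β]; the values of n, β and the number of trials are arbitrary choices made purely for illustration.

    import math
    import random

    def empirical_deviation(n, beta, trials=10_000):
        """Estimate Pr[|X - mu| > beta * sigma] for X = number of heads in n fair coin flips."""
        mu, sigma = n / 2, math.sqrt(n) / 2
        hits = 0
        for _ in range(trials):
            x = sum(random.getrandbits(1) for _ in range(n))
            if abs(x - mu) > beta * sigma:
                hits += 1
        return hits / trials

    beta = 2.0                                     # deviation measured in units of sigma
    clt_estimate = math.erfc(beta / math.sqrt(2))  # equals 2 * Pr[N(0,1) > beta]
    for n in (20, 100, 1000):
        print(n, empirical_deviation(n, beta), clt_estimate)

The point of the comparison is exactly the first deficiency noted above: the CLT by itself says nothing about how large n must be before the Gaussian estimate is accurate.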

Chernoff/Hoeffding bounds deal with both of these deficiencies. Here is the most general (though not often used) form of the bounds:

Theorem 14.1  Let X_1, ..., X_n be independent 0-1 r.v.'s with E[X_i] = p_i (not necessarily equal). Let X = Σ_{i=1}^n X_i, μ = E[X] = Σ_{i=1}^n p_i, and p = μ/n (the average of the p_i). Then:

1. Pr[X ≥ μ + λ] ≤ exp{−n H_p(p + λ/n)}   for 0 < λ < n − μ;
2. Pr[X ≤ μ − λ] ≤ exp{−n H_{1−p}(1 − p + λ/n)}   for 0 < λ < μ,

where H_p(x) = x ln(x/p) + (1 − x) ln((1 − x)/(1 − p)) is the relative entropy of x with respect to p.

The theorem gives bounds on the probabilities of deviating by λ above or below the mean. Note that the relative entropy is always positive (and is zero iff x = p). The relative entropy H_p(x) can be viewed as a measure of distance between the two-point distributions (p, 1 − p) and (x, 1 − x). Thus the quantity appearing in the exponent of the tail probability in the theorem can be interpreted as n times the distance between the true two-point distribution (p, 1 − p) and the distribution (p + λ/n, 1 − p − λ/n), in which the mean has been shifted to the tail value p + λ/n.

Proof: The bounds in 1 and 2 are symmetrical (replace X by n − X to get 2 from 1). Therefore, we need only prove 1. Let m = μ + λ > μ. We are trying to estimate Pr[X ≥ m]. The general steps we will use to obtain the bound (and which can be used in other, similar scenarios) are as follows:

- Exponentiate both sides of the inequality X ≥ m, with a free parameter t. (This is like taking a Laplace transform.)
- Apply Markov's Inequality.
- Use the independence of the X_i to transform the expectation of a sum into a product of expectations.
- Plug in specific information about the distribution of each X_i to evaluate the expectations.
- Use concavity to bound the resulting product as a function of p rather than of the individual p_i.
- Minimize the final bound as a function of t, using basic calculus.

Here is the derivation:

    Pr[X ≥ m] = Pr[e^{tX} ≥ e^{tm}]                     for any t > 0 (to be chosen later)
              ≤ e^{−tm} E[e^{tX}]                        by Markov's Inequality
              = e^{−tm} ∏_{i=1}^n E[e^{t X_i}]           by independence of the X_i
              = e^{−tm} ∏_{i=1}^n (e^t p_i + 1 − p_i)    from the distributions of the X_i    (14.1)
              ≤ e^{−tm} (e^t p + 1 − p)^n                by the Arithmetic Mean/Geometric Mean inequality.

More generally, we can view the last step as an application of concavity of the function γ(z) = ln((e^t − 1)z + 1).
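For intuition about how tight this entropy form is, here is a small Python sketch that evaluates the bound of part 1 against the exact binomial tail in the special case where all the p_i are equal (so X is Binomial(n, p)); the values of n, p and λ are arbitrary illustration choices.

    import math

    def rel_entropy(x, p):
        """H_p(x) = x ln(x/p) + (1 - x) ln((1 - x)/(1 - p))."""
        return x * math.log(x / p) + (1 - x) * math.log((1 - x) / (1 - p))

    def exact_upper_tail(n, p, m):
        """Pr[X >= m] for X ~ Binomial(n, p), by direct summation."""
        return sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

    n, p = 100, 0.5
    mu = n * p
    for lam in (5, 10, 20):
        bound = math.exp(-n * rel_entropy(p + lam / n, p))   # part 1 of Theorem 14.1
        print(lam, exact_upper_tail(n, p, int(mu + lam)), bound)

The exact tail is always below the bound; the corollaries below extract simpler, slightly weaker exponents from the same expression.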

Finally, we need to optimize this bound over t. Rewriting the final expression above as exp{n ln(p e^t + (1 − p)) − tm} and differentiating w.r.t. t, we find that the minimum is attained when

    e^t = m(1 − p) / ((n − m)p)

(and note that this is indeed > 1, so t > 0 as required). Plugging this value in, we find that

    Pr[X ≥ m] ≤ exp{ n ln( n(1 − p)/(n − m) ) − m ln( m(1 − p)/((n − m)p) ) }.

At this point, we need only massage this formula into the right form to get our result. It equals

    exp{ n ln( (1 − p)/(1 − m/n) ) − m ln( ((1 − p)(m/n)) / ((1 − m/n)p) ) }
    = exp{ −n [ (m/n) ln( (m/n)/p ) + (1 − m/n) ln( (1 − m/n)/(1 − p) ) ] }
    = exp{ −n H_p(p + λ/n) },

by the definition of relative entropy and the substitution m = μ + λ, which is equivalent to m/n = μ/n + λ/n = p + λ/n. This proves part 1 of Theorem 14.1, and hence also part 2 by symmetry.

Note that the above theorem and proof apply equally to independent r.v.'s X_i taking any values in the interval [0, 1] (not just the values 0 and 1). The only step in the proof where we used the distributions of the X_i was step (14.1), and one can check that this still holds (with = replaced by ≤) for any X_i on [0, 1] (i.e., 0-1 is the worst case) for given mean p_i. Similarly, an analogous bound holds when the X_i live on any bounded intervals [a_i, b_i] (and then the quantities b_i − a_i will appear in the bound) — Exercise. In fact, one can obtain similar bounds even when the X_i are unbounded, provided their distributions fall off quickly enough, as in the case (e.g.) of a geometric random variable.

14.2  Some Corollaries

We now give some more useful versions of the Chernoff bound, all obtained from the above basic form.

Corollary 14.2

    Pr[X ≥ μ + λ] ≤ exp(−2λ²/n)   and   Pr[X ≤ μ − λ] ≤ exp(−2λ²/n).

Proof: (Sketch) We consider only the upper tail; the lower tail follows by symmetry. Writing z = λ/n, and comparing the exponent in part 1 of Theorem 14.1 with our target value 2λ²/n, we need to show that

    f(z) := (p + z) ln((p + z)/p) + (1 − p − z) ln((1 − p − z)/(1 − p)) − 2z² ≥ 0

on the interval 0 ≤ z ≤ 1 − p. To do this we note that f(0) = 0, so it is enough to show that f(z) is non-decreasing on this interval. This can be verified by looking at the first and second derivatives of f. The details are left as an exercise.

We now prove the following alternative corollary, due to Angluin and Valiant [AV79], which is never much worse and is much sharper in cases where μ ≪ n (e.g., μ = Θ(log n)).

Corollary 14.3  For 0 < β < 1 we have

    Pr[X ≤ (1 − β)μ] ≤ exp{−μ(β + (1 − β) ln(1 − β))} ≤ exp(−β²μ/2).

And for β > 0 we have

    Pr[X ≥ (1 + β)μ] ≤ exp{−μ(−β + (1 + β) ln(1 + β))} ≤ exp(−β²μ/(2 + β))   for all β > 0,

and hence also

    Pr[X ≥ (1 + β)μ] ≤ exp(−β²μ/3)   for 0 < β ≤ 1.

Proof: We first prove the bound on the lower tail. Plugging λ = βμ = βpn into part 2 of Theorem 14.1 gives

    Pr[X ≤ (1 − β)μ] ≤ exp{−n H_{1−p}(1 − p + βp)}                                                  (14.2)
                     = exp{−n [ (1 − p + βp) ln((1 − p + βp)/(1 − p)) + (p − βp) ln((p − βp)/p) ]}
                     ≤ exp{−n [ βp + p(1 − β) ln(1 − β) ]}                                          (14.3)
                     = exp{−μ(β + (1 − β) ln(1 − β))}
                     ≤ exp{−μ(β − β + β²/2)}                                                        (14.4)
                     = exp{−μβ²/2}.

The inequality in (14.2) comes directly from Theorem 14.1. Inequality (14.3) comes from the fact that

    ln( (1 − p)/(1 − p + βp) ) = ln( 1 − βp/(1 − p + βp) ) < −βp/(1 − p + βp),

using ln(1 − x) < −x, so that (1 − p + βp) ln((1 − p + βp)/(1 − p)) > βp. Finally, inequality (14.4) is due to the fact that (1 − x) ln(1 − x) ≥ −x + x²/2 for 0 < x < 1.

The proof for the upper tail is very similar. The first step is to plug λ = βμ into part 1 of Theorem 14.1 and follow the same steps as above. The only difference is that this time we use the inequality ln(1 + x) < x to get the analog of (14.3), and then the inequality ln(1 + β) > β/(1 + β/2) to get the final form. The details are left as an exercise.

Exercise: Is this bound ever worse than that of Corollary 14.2? If so, by how much?

Exercise: Verify that Theorem 14.1 (and therefore also the above Corollaries) holds when the X_i are not necessarily 0,1-valued but can take any values in the range [0, 1], subject only to their expectations being p_i. [Hint: Show that, if Y is any r.v. taking values in [0, 1], and Z is a {0,1}-valued r.v. with E[Z] = E[Y], then for any convex function f we have E[f(Y)] ≤ E[f(Z)]. Apply this fact to the function f(x) = e^{tx} at an appropriate point in the proof of Theorem 14.1.]

Exercise: Suppose that the r.v.'s X_i are independent and X_i takes values in the interval [a_i, b_i]. Prove the following generalization of Corollary 14.2:

    Pr[X ≤ μ − λ], Pr[X ≥ μ + λ]  ≤  exp( −2λ² / Σ_{i=1}^n (b_i − a_i)² ).

[Hint: You will need to go back and tweak the proof of Theorem 14.1. Is there a simpler scaling proof when all the intervals [a_i, b_i] are the same?]
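To see numerically why the multiplicative form of Corollary 14.3 matters when μ ≪ n, the following Python sketch evaluates both lower-tail bounds at the same deviation; the values of n, p and β are arbitrary illustration choices.

    import math

    def lower_tail_bounds(n, p, beta):
        """Corollary 14.2 vs. Corollary 14.3 applied to Pr[X <= (1 - beta) * mu]."""
        mu = n * p
        lam = beta * mu
        hoeffding = math.exp(-2 * lam ** 2 / n)            # Corollary 14.2 with lambda = beta * mu
        angluin_valiant = math.exp(-beta ** 2 * mu / 2)    # Corollary 14.3, lower tail
        return hoeffding, angluin_valiant

    # A regime with mu = Theta(log n), so mu << n.
    n = 10_000
    p = math.log(n) / n
    print(lower_tail_bounds(n, p, beta=0.9))

Here Corollary 14.2 gives a bound very close to 1 (i.e., it says essentially nothing), while Corollary 14.3 is still nontrivial: its exponent scales with μ rather than with λ²/n.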

14.3  Simple Examples

Example 14.4  Fair coin flips. Given a coin where each flip has a probability p_i = p = 1/2 of coming up heads, Corollary 14.2 gives the following bound on the probability that the number of heads X differs from the expected number μ = n/2 by more than λ:

    Pr[|X − μ| ≥ λ] ≤ 2 e^{−2λ²/n}.

Note that the standard deviation is σ = √(n p(1 − p)) = √n / 2. So for λ = βσ the deviation probability is at most 2 e^{−β²/2}. This is similar to the asymptotic bound obtained from the CLT (but is valid for all n).

Example 14.5  Biased coin flips. In this example, the bit flips still all have the same probability, but now the probability of coming up heads is p_i = p = 3/4. We can then ask for the value of Pr[at most 1/2 of the flips come up heads]. The expected number of heads is μ = 3n/4. Thus, using Corollary 14.3 with β = 1/3 we get

    Pr[X ≤ n/2] = Pr[X ≤ μ − n/4]                    (14.5)
                = Pr[X ≤ (1 − 1/3)μ]                 (14.6)
                ≤ exp(−(1/3)² (3n/4) / 2)            (14.7)
                = e^{−n/24}.                         (14.8)

(A similar bound, with a slightly better constant in the exponent, is obtained by using Corollary 14.2.)

Recall that we have already used a bound of this form twice in the course: in boosting the success probability of two-sided error randomized algorithms (BPP algorithms), and in the analysis of the median trick for fully polynomial randomized approximation schemes.

14.4  Randomized Routing

We will now see an example of a randomized algorithm whose analysis makes use of the above Chernoff bounds. The problem is defined as follows. Consider the network defined by the n-dimensional hypercube: i.e., the vertices of the network are the strings in {0, 1}^n, and edges connect pairs of vertices that differ in exactly one bit. We shall think of each edge as consisting of two links, one in each direction. Let N = 2^n, the number of vertices. Now let π be any permutation on the vertices of the cube. The goal is to send one packet from each i to its corresponding π(i), for all i simultaneously. This problem can be seen as a building block for more realistic routing applications. A strategy for routing permutations on a graph can give useful inspiration for solving similar problems on real networks.

We will use a synchronous model, i.e., the routing occurs in discrete time steps, and in each time step, one packet is allowed to travel along each (directed) edge. If more than one packet wishes to traverse a given edge in the same time step, all but one of these packets are held in a queue at the edge. We assume any fair queueing discipline (e.g., FIFO). The goal is to minimize the total time before all packets have reached their destinations.

A priori, a packet only has to travel O(n) steps (the diameter of the cube). However, due to the potential for congestion on the edges, it is possible that the process will take much longer than this as packets get delayed in the queues.
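The synchronous model just described is easy to simulate directly, which can be handy for sanity-checking the analysis that follows. Here is a minimal Python sketch of one FIFO variant; the representation of a route as a list of directed edges is a choice made here purely for illustration.

    from collections import deque, defaultdict

    def route_synchronously(paths):
        """Synchronous FIFO routing: each directed edge forwards one queued packet per time step.
        paths[i] is the ordered list of directed edges that packet i must traverse.
        Returns the number of steps until every packet has arrived."""
        nxt = {i: 0 for i, p in paths.items() if p}        # index of the next edge each packet needs
        queues = defaultdict(deque)                        # directed edge -> queue of waiting packets
        for i in nxt:
            queues[paths[i][0]].append(i)
        steps = 0
        while nxt:
            steps += 1
            moved = [q.popleft() for q in queues.values() if q]   # one packet crosses each busy edge
            for i in moved:
                nxt[i] += 1
                if nxt[i] < len(paths[i]):
                    queues[paths[i][nxt[i]]].append(i)            # join the queue for its next edge
                else:
                    del nxt[i]                                    # packet i has reached its destination
        return steps

Fed with the routes produced by a concrete strategy (such as the bit-fixing scheme described next), this returns the actual completion time, which the analysis below bounds by n plus the maximum queueing delay suffered by any packet.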

[Figure 14.1: Randomized routing in a hypercube.]

Definition 14.6  An oblivious strategy is one in which the route chosen for each packet does not depend on the routes of the other packets. That is, the path from i to π(i) is a function of i and π(i) only.

An oblivious routing strategy is thus one in which there is no global synchronization, a realistic constraint if we are interested in real-world problems.

Theorem 14.7 [KKT90]  For any deterministic, oblivious routing strategy on the hypercube, there exists a permutation that requires Ω(√(N/n)) = Ω(√(2^n/n)) steps.

This is quite an undesirable worst case. Fortunately, randomization can provide a dramatic improvement on this lower bound.

Theorem 14.8 [VB81]  There exists a randomized, oblivious routing strategy that terminates in O(n) steps with very high probability.

We now sketch this randomized strategy, which consists of two phases. In phase 1, each packet i is routed to δ(i), where the intermediate destination δ(i) is chosen u.a.r. In phase 2, each packet is routed from δ(i) to its desired final destination, π(i). In both phases, we use bit-fixing paths to route the packets. In the bit-fixing path from vertex x to vertex y of the cube, we flip each bit x_i to y_i (if necessary) in left-to-right order. For example, in the hypercube {0, 1}^7 the bit-fixing path from a vertex x to a vertex y visits one intermediate vertex for each of the (at most 7) positions in which x and y differ, fixing those positions from left to right (a short code sketch of this rule appears below). Bit-fixing paths are always shortest paths. Note that δ is not required to be a permutation (i.e., different packets may share the same intermediate destination), and this strategy is oblivious.

This strategy breaks the symmetry in the problem by simply choosing a random intermediate destination for each packet. This makes it impossible for an adversary with knowledge of the strategy to engineer a bad permutation.

In our analysis, we will see that each phase of this algorithm takes only O(n) steps with high probability. In order to do this, we will take a union bound over all 2^n packets, so we will need an exponentially small probability of a single packet taking a long time. This is where Chernoff/Hoeffding bounds will be required.
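The bit-fixing rule itself is only a few lines of code. Here is a sketch on bit strings; the two 7-bit endpoints are made-up illustrative values, not the strings from the original example.

    def bit_fixing_path(x: str, y: str):
        """Vertices visited by the bit-fixing path from x to y, fixing differing bits left to right."""
        path, cur = [x], list(x)
        for i in range(len(x)):
            if cur[i] != y[i]:
                cur[i] = y[i]
                path.append("".join(cur))
        return path

    # Illustrative endpoints in {0,1}^7; the path makes one hop per differing bit (here 4 hops).
    print(" -> ".join(bit_fixing_path("0100110", "1110011")))

Because each hop fixes one differing bit and no bit is revisited, the length of the path equals the Hamming distance between x and y, which is why bit-fixing paths are shortest paths.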

Proof: Phase 1 starts from a fixed source i and routes to a random destination δ(i). Phase 2 starts from a random source δ(i) and routes to a fixed destination π(i). By symmetry, it suffices to prove that phase 1 terminates in O(n) steps w.h.p.

Let D(i) be the delay suffered by packet i; then the total time taken is at most n + max_i D(i). We will soon prove that, for a suitable constant c,

    ∀i:  Pr[D(i) > cn] ≤ e^{−2n}.                                                              (14.9)

Taking a union bound, it follows that

    Pr[∃i : D(i) > cn] ≤ 2^n e^{−2n} < 2^{−n}.

It remains to prove (14.9). For a single packet traveling from i to δ(i), let P_i = (e_1, e_2, ..., e_k) be the sequence of edges on the route taken by packet i. Let S_i = {j ≠ i : P_j intersects P_i}, the set of packets whose routes intersect P_i. The guts of the proof lie in the following claim.

Claim 14.9  D(i) ≤ |S_i|.

Proof: First note that, in the hypercube, when two bit-fixing paths diverge they will not come together again; i.e., routes which intersect will intersect in only one contiguous segment. With this observation, we can now charge each unit of delay for packet i to a distinct member of S_i.

[Figure 14.2: Routes P_i and P_j in the hypercube.]

Definition  Let j be a packet in S_i ∪ {i}. The lag of packet j at the start of time step t is t − l, where e_l is the next edge (on P_i) that packet j wants to traverse.

Note that the lag of packet i proceeds through the sequence of values 0, 1, ..., D(i). Consider the time step t when the lag goes from L to L + 1; suppose this happens when i is waiting to traverse edge e_l. Then i must be held up in the queue at e_l (else its lag would not increase), so there exists at least one other packet at e_l with lag L, and this packet actually moves at step t. Now consider the last time at which there exists any packet with lag L: say this is time t'. Then some such packet must leave path P_i (or reach its destination) at time t', for at any step, among all packets with a given lag at a given edge, at least one must move (and it would retain the same lag if it remained on path P_i). So we may charge this unit of delay to that packet. Each packet is charged at most once, because it is charged only when it leaves P_i (which, by the observation at the start of the proof, happens only once).

Proof: (of inequality (14.9)) Define

    H_{ij} = 1 if P_i and P_j share an edge, and 0 otherwise.

By Claim 14.9, D(i) ≤ Σ_{j≠i} H_{ij}. And since the H_{ij} are independent, we can use a Chernoff bound to bound the tail probability of this sum. First, we need a small detour to bound the mean, μ = E[Σ_{j≠i} H_{ij}] (since the expectations of the H_{ij} are tricky to get hold of). Define

    T(e_l) = number of paths P_j that pass through edge e_l (an edge on path P_i).

Then

    E[T(e_l)] = E[total length of all paths] / (number of directed edges) = N(n/2) / (Nn) = 1/2,

so the expected number of paths that intersect P_i is

    E[Σ_{j≠i} H_{ij}] ≤ Σ_{l=1}^k E[T(e_l)] = k/2 ≤ n/2.                                       (14.10)

Note that we used the fact that E[T(e_l)] does not depend on l, which follows by symmetry. (One might think this is false because we chose a particular bit ordering. However, the ordering does not matter because for every source and every i, with probability 1/2 we choose a sink such that the bit-fixing algorithm flips the i-th bit.)

We can now apply the Chernoff bound (Angluin/Valiant version):

    Pr[D(i) ≥ (1 + β)μ] ≤ Pr[Σ_{j≠i} H_{ij} ≥ (1 + β)μ] ≤ exp(−β²μ/(2 + β)),

where μ = E[Σ_{j≠i} H_{ij}] as above. It is easy to check (exercise!) that, for any fixed value of the tail (1 + β)μ, the above bound is worst when μ is maximized. Thus we may plug in our upper bound μ ≤ n/2 and conclude that

    Pr[D(i) ≥ (1 + β)n/2] ≤ exp(−(β²/(2 + β)) (n/2)).

Taking β = 6,

    Pr[D(i) ≥ 7n/2] ≤ exp(−36n/16) = exp(−9n/4) ≤ exp(−2n).

Thus, using a union bound as indicated earlier, we see that all packets reach their destinations in Phase 1 in at most n + 7n/2 = 9n/2 steps w.h.p. The complete algorithm (phases 1 and 2) thus terminates in 9n steps. (To keep the phases separate, we assume that all packets wait until time 9n/2 before beginning Phase 2.) This completes the proof that the randomized oblivious strategy routes any permutation in O(n) steps w.h.p.
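The quantity at the heart of the analysis, Σ_{j≠i} H_{ij} = |S_i|, is easy to examine empirically. The Python sketch below samples random intermediate destinations δ, builds all N bit-fixing paths, and reports the average and maximum of |S_i| over packets, to be compared with the mean bound n/2 from (14.10); the integer encoding of vertices, the value of n, and the number of trials are choices made purely for illustration.

    import random
    from collections import defaultdict

    def bit_fixing_edges(x, y, n):
        """Directed edges of the bit-fixing path from x to y (vertices encoded as n-bit integers)."""
        edges, cur = [], x
        for b in range(n - 1, -1, -1):               # fix bits from the most significant end
            if ((cur ^ y) >> b) & 1:
                nxt = cur ^ (1 << b)
                edges.append((cur, nxt))
                cur = nxt
        return edges

    def phase1_intersections(n, trials=20):
        """Sample random delta and report statistics of |S_i| = sum_{j != i} H_ij for Phase 1."""
        N = 1 << n
        avg, worst = 0.0, 0
        for _ in range(trials):
            paths = [bit_fixing_edges(i, random.randrange(N), n) for i in range(N)]
            users = defaultdict(set)                 # directed edge -> packets whose path uses it
            for i, p in enumerate(paths):
                for e in p:
                    users[e].add(i)
            sizes = []
            for i, p in enumerate(paths):
                s = set()
                for e in p:
                    s |= users[e]
                s.discard(i)
                sizes.append(len(s))                 # |S_i|
            avg += sum(sizes) / N
            worst = max(worst, max(sizes))
        print(f"n={n}: average |S_i| = {avg / trials:.2f} (mean bound n/2 = {n / 2}), largest |S_i| seen = {worst}")

    phase1_intersections(10)

By Claim 14.9, every delay D(i) is at most the corresponding |S_i|, and the Chernoff bound above shows that |S_i| exceeding 7n/2 is an exponentially unlikely event.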

References

[AV79] D. Angluin and L.G. Valiant, "Fast probabilistic algorithms for Hamiltonian circuits and matchings," Journal of Computer and System Sciences 19, 1979.

[C52] H. Chernoff, "A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations," Annals of Mathematical Statistics 23, 1952.

[H63] W. Hoeffding, "Probability Inequalities for Sums of Bounded Random Variables," American Statistical Association Journal, 1963.

[KKT90] C. Kaklamanis, D. Krizanc and A. Tsantilas, "Tight Bounds for Oblivious Routing in the Hypercube," Proceedings of the Symposium on Parallel Algorithms and Architectures, 1990.

[VB81] L.G. Valiant and G.J. Brebner, "Universal schemes for parallel communication," Proceedings of the 13th Annual ACM Symposium on Theory of Computing, 1981.
