PROBABILITY AND STATISTICS IN COMPUTING
III. Discrete Random Variables: Expectation and Deviations
From: [5][7][6]
German Hernandez

1. Random variables

1.1. Measurable function. Let (Ω, A) and (Θ, G) be measurable spaces. A function Υ : Ω → Θ is called a measurable function if Υ⁻¹(G) ∈ A for all G ∈ G, i.e., the pre-image of a measurable set is measurable. In order to define real random variables we use a special σ-algebra on the real line, called the Borel σ-algebra and denoted B(R) or simply B. This is the minimal σ-algebra on R that contains the sets of the form (−∞, x] for all x ∈ R.

1.2. Random variable. A random variable (r.v.) X is a Borel measurable function defined on a probability space. Here we consider only real random variables, i.e., a r.v. is of the form X : (Ω, A, P) → (R, B). A r.v. can be seen as a translation or encoding of the outcomes of a random experiment into the real line. The condition of Borel measurability allows us to translate or encode the probabilistic structure of the experiment in a consistent manner from the events of the random experiment to Borel sets.

[Figure: a random variable X : Ω → R maps the outcomes ω₁, ω₂, …, ω_k of the probability space (Ω, A, P) to points on the real line R.]

1.3. Induced probability by a r.v. A random variable X : Ω → R from a probability space (Ω, A, P) defines an induced probability measure P_X on (R, B) given by P_X : B → [0, 1], B ↦ P(X⁻¹(B)). Usually, it is difficult to work directly with the induced probability. The alternative is to work with a function on R, called a distribution function, that encodes all the relevant information of the induced probability.

1.4. Distribution function of a r.v. The distribution function or cumulative distribution function of a r.v. X, denoted F_X, is defined as
F_X : R → R, x ↦ P(X ≤ x) = P(X⁻¹((−∞, x])).

[Figure: the distribution function F_X(x) = P(X⁻¹((−∞, x])) is computed by pulling the Borel set (−∞, x] back through the random variable X to an event of the probability space (Ω, A, P).]

The distribution function has the following properties: (i) lim_{x→−∞} F_X(x) = 0 and lim_{x→+∞} F_X(x) = 1, (ii) F_X is a nondecreasing function, and (iii) F_X is right continuous.

1.5. Types of random variables. A random variable is called discrete if its range is countable. In this case we can define the probability mass function f_X(x) = P(X = x). A random variable is called continuous if its distribution function F_X is continuous, and it is called absolutely continuous if there exists a non-negative Riemann integrable function f_X : R → [0, ∞), called the probability density of the r.v., such that F_X(x) = ∫_{−∞}^x f_X(τ) dτ.

2. Expectation. Let X be a discrete random variable taking on the values x₁, x₂, …, x_k, …. The mean value or expectation of X, denoted by E[X], is defined by
E[X] = Σ_{k=1}^∞ x_k P(X = x_k).
Let X be a continuous random variable taking values on R with density f_X. Then E[X] is defined by
E[X] = ∫_{−∞}^∞ τ f_X(τ) dτ.

Example 1. Indicator random variable. Given an event A, the indicator random variable of the event, denoted I_A or sometimes I{A}, is equal to 1 if A occurs, or 0 if A does not occur:
I_A = 1 if A occurs, 0 if A^c occurs.
Find its expectation:
E[I_A] = 1·P(A) + 0·P(A^c) = P(A).
A random variable X that only takes on the values 0 or 1 is called a Bernoulli random variable, and its expected value is E[X] = P(X = 1).

Example 2. Geometric random variable. Consider a sequence of independent trials, each of which is a success with probability p ∈ (0, 1) and a failure with probability 1 − p. If X represents the trial number of the first success, then X is said to be a geometric random variable with parameter p. Compute its expected value.

We have that P(X = n) = p(1 − p)^{n−1}, n ≥ 1. Hence, using the identity
Σ_{n=1}^∞ n a^{n−1} = d/da Σ_{n=0}^∞ a^n = d/da (1/(1 − a)) = 1/(1 − a)²,
we get
E[X] = Σ_{n=1}^∞ n p(1 − p)^{n−1} = p Σ_{n=1}^∞ n(1 − p)^{n−1} = p · 1/p² = 1/p.
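
A quick Monte Carlo check of E[X] = 1/p (a minimal Python sketch; the choice p = 0.3, the seed, and the sample size are arbitrary illustrations):

import random

def sample_geometric(p, rng):
    """Number of independent trials up to and including the first success."""
    n = 1
    while rng.random() >= p:
        n += 1
    return n

rng = random.Random(0)
p, trials = 0.3, 100_000
mean = sum(sample_geometric(p, rng) for _ in range(trials)) / trials
print(f"empirical mean = {mean:.3f}, theory 1/p = {1 / p:.3f}")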

Example 3. Unbounded expectation. Let X be a random variable taking the value 2^i with probability
P(X = 2^i) = 1/2^i, i = 1, 2, ….
Then
E[X] = Σ_{i=1}^∞ 2^i · 1/2^i = Σ_{i=1}^∞ 1 = ∞.

Theorem 1. Let X be a discrete random variable that takes only nonnegative integer values. Then
E[X] = Σ_{i=1}^∞ P(X ≥ i).

Proof.
Σ_{i=1}^∞ P(X ≥ i) = Σ_{i=1}^∞ Σ_{j=i}^∞ P(X = j) = Σ_{j=1}^∞ Σ_{i=1}^j P(X = j) = Σ_{j=1}^∞ j P(X = j) = E[X].
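
The identity is easy to check numerically; below is a minimal sketch with a hand-picked pmf on {0, 1, 2, 3} (the pmf values are an arbitrary illustration):

# E[X] versus the tail sum Σ_{i≥1} P(X ≥ i) for a small discrete pmf.
pmf = {0: 0.1, 1: 0.4, 2: 0.3, 3: 0.2}

expectation = sum(j * p for j, p in pmf.items())
tail_sum = sum(sum(p for j, p in pmf.items() if j >= i)
               for i in range(1, max(pmf) + 1))
print(expectation, tail_sum)   # both ≈ 1.6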

Example 4. Geometric random variable (continued). Here
P(X ≥ i) = Σ_{n=i}^∞ p(1 − p)^{n−1} = (1 − p)^{i−1},
so that
E[X] = Σ_{i=1}^∞ P(X ≥ i) = Σ_{i=1}^∞ (1 − p)^{i−1} = 1/(1 − (1 − p)) = 1/p.

2.1. Linearity. Given a family of r.v.'s {X_i}_{i=1,…,n},
E[Σ_{i=1}^n X_i] = Σ_{i=1}^n E[X_i].

Proof (for two variables; the general case follows by induction).
E[X + Y] = Σ_x Σ_y (x + y) P((X = x) ∩ (Y = y))
= Σ_x Σ_y x P((X = x) ∩ (Y = y)) + Σ_x Σ_y y P((X = x) ∩ (Y = y))
= Σ_x x Σ_y P((X = x) ∩ (Y = y)) + Σ_y y Σ_x P((X = x) ∩ (Y = y))
= Σ_x x P(X = x) + Σ_y y P(Y = y)
= E[X] + E[Y].

Theorem 2. E[cX] = cE[X] for any constant c.

Example 5. Bubble sort. Let X denote the number of comparisons needed by bubble sort. Obtain upper and lower bounds for E[X].

Bubble-Sort(A[1..n])
  for i = 1 to n − 1
    for j = 1 to n − i
      if A[j] > A[j + 1]
        then exchange A[j] ↔ A[j + 1]
    if no swap occurred in this pass, exit
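
A runnable version of the pseudocode that also counts comparisons (a minimal Python sketch; the function name and the comparison counter are additions for illustration):

def bubble_sort(a):
    """Sort a copy of a; return (sorted list, number of comparisons made)."""
    a = list(a)
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:        # the early-exit test of the pseudocode
            break
    return a, comparisons

print(bubble_sort([2, 4, 1, 5, 6, 3]))   # ([1, 2, 3, 4, 5, 6], 14)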

In the worst case, bubble sort requires
(n − 1) + (n − 2) + ⋯ + 1 = n(n − 1)/2
comparisons; hence
E[X] ≤ n(n − 1)/2.
In order to obtain a lower bound we need the concept of the number of inversions in a permutation. For a permutation i₁, i₂, …, i_n of 1, 2, …, n we say that an ordered pair (i, j) is an inversion of the permutation if i < j and j precedes i in the permutation. For instance, the permutation 2, 4, 1, 5, 6, 3 has five inversions, namely (1, 2), (1, 4), (3, 4), (3, 5), (3, 6). Because the values of every inversion pair will eventually have to be interchanged (and thus compared), it follows that the number of comparisons made by bubble sort is at least as large as the number of inversions of the initial ordering.
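
The definition translates directly into code; this short sketch reproduces the five inversions of the example permutation (the function name is illustrative):

from itertools import combinations

def inversions(perm):
    """Return the pairs (i, j), i < j, such that j precedes i in perm."""
    pos = {v: idx for idx, v in enumerate(perm)}
    return [(i, j) for i, j in combinations(sorted(perm), 2) if pos[j] < pos[i]]

print(inversions([2, 4, 1, 5, 6, 3]))
# [(1, 2), (1, 4), (3, 4), (3, 5), (3, 6)]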

Then, if I denotes the number of inversions, I ≤ X, which implies that E[I] ≤ E[X]. Let, for i < j,
I(i, j) = 1 if (i, j) is an inversion of the initial ordering, 0 otherwise;
then it follows that
I = Σ_{i<j} I(i, j).

Hence, using linearity of the expectation,
E[I] = Σ_{i<j} E[I(i, j)].
Now, for i < j,
E[I(i, j)] = P{j precedes i in the initial ordering} = 1/2.
There are (n choose 2) pairs (i, j) for which i < j, so it follows that
E[I] = Σ_{i<j} E[I(i, j)] = (1/2)(n choose 2) = n(n − 1)/4.
Thus
n(n − 1)/4 ≤ E[X] ≤ n(n − 1)/2,
so E[X] = Θ(n²).
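
A Monte Carlo check that the average comparison count of bubble sort on random permutations lands between the two bounds (a sketch; n, the seed, and the trial count are arbitrary):

import random

def bubble_comparisons(a):
    """Comparisons made by bubble sort (with early exit) on list a."""
    a, count = list(a), 0
    for i in range(len(a) - 1):
        swapped = False
        for j in range(len(a) - 1 - i):
            count += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:
            break
    return count

rng = random.Random(1)
n, trials = 20, 2000
avg = sum(bubble_comparisons(rng.sample(range(n), n)) for _ in range(trials)) / trials
print(n * (n - 1) / 4, avg, n * (n - 1) / 2)   # lower bound, estimate, upper bound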

3. The Quicksort and Find Algorithms. Suppose that we want to sort a given set of n distinct values x₁, x₂, …, x_n. A more efficient algorithm than bubble sort for doing so is the quicksort algorithm, which is recursively defined as follows. When n = 2, the algorithm compares the two values and puts them in the appropriate order. When n > 2, one of the values is chosen, say x_i, and then all of the other values are compared with x_i. Those smaller than x_i are put in a bracket to the left of x_i, and those larger than x_i are put in a bracket to the right of x_i. The algorithm then repeats itself on these brackets, continuing until all values have been sorted. For instance, suppose that we want to sort the following 10 distinct values:

5, 9, 3, 10, 11, 14, 8, 4, 17, 6
{5, 9, 3, 8, 4, 6}, 10, {11, 14, 17}
{5, 3, 4}, 6, {9, 8}, 10, {11, 14, 17}
{3}, 4, {5}, 6, {9, 8}, 10, {11, 14, 17}

It is intuitively clear that the worst case occurs when every comparison value chosen is an extreme value. In this worst-case scenario the number of comparisons needed is
(n − 1) + (n − 2) + ⋯ + 1 = n(n − 1)/2.
A better indication of the efficiency of the algorithm is obtained by determining the average number of comparisons needed when the comparison values are randomly chosen.

Let X denote the number of comparisons needed. Let 1 denote the smallest value, let 2 denote the second smallest, and so on. Then, for 1 ≤ i < j ≤ n, let I(i, j) equal 1 if i and j are ever directly compared, and let it equal 0 otherwise. Then
X = Σ_{i=1}^{n−1} Σ_{j=i+1}^n I(i, j),
which implies that
E[X] = E[Σ_{i=1}^{n−1} Σ_{j=i+1}^n I(i, j)]
= Σ_{i=1}^{n−1} Σ_{j=i+1}^n E[I(i, j)]
= Σ_{i=1}^{n−1} Σ_{j=i+1}^n P{i and j are ever compared}.

To determine the probability that i and j are ever compared, note that the values i, i+1, …, j−1, j will initially be in the same bracket and will remain in the same bracket as long as the number chosen for the comparison is not between i and j. The values i and j are directly compared if and only if the first comparison value chosen among i, i+1, …, j is either i or j, and each of these j − i + 1 values is equally likely to be the first one chosen. Therefore,
P{i and j are ever compared} = 2/(j − i + 1).

Consequently, we see that
E[X] = Σ_{i=1}^{n−1} Σ_{j=i+1}^n 2/(j − i + 1)
= 2 Σ_{i=1}^{n−1} Σ_{k=2}^{n−i+1} 1/k        (substituting k = j − i + 1)
= 2 Σ_{k=2}^{n} Σ_{i=1}^{n−k+1} 1/k          (interchanging the order of summation)
= 2 Σ_{k=2}^{n} (n + 1 − k)/k
= 2(n + 1) Σ_{k=2}^{n} 1/k − 2(n − 1)
= (2n + 2) Σ_{k=1}^{n} 1/k − 4n.
For large n, Σ_{k=1}^{n} 1/k ≈ ln(n). Thus, the quicksort algorithm requires, on average, approximately 2n ln(n) comparisons to sort n values.
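
An empirical check of the average comparison count against both the exact expression (2n + 2)·H_n − 4n and the 2n ln(n) asymptote (a minimal sketch; pivots are chosen uniformly at random, and n, the seed, and the trial count are arbitrary):

import math
import random

def quicksort_comparisons(a, rng):
    """Total comparisons made by quicksort with uniformly random pivots."""
    if len(a) < 2:
        return 0
    pivot = rng.choice(a)
    left = [x for x in a if x < pivot]
    right = [x for x in a if x > pivot]
    return (len(a) - 1 + quicksort_comparisons(left, rng)
                       + quicksort_comparisons(right, rng))

rng = random.Random(2)
n, trials = 1000, 200
avg = sum(quicksort_comparisons(list(range(n)), rng) for _ in range(trials)) / trials
h_n = sum(1 / k for k in range(1, n + 1))
print(avg, (2 * n + 2) * h_n - 4 * n, 2 * n * math.log(n))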

3.1. The Find Algorithm. Suppose that we want to find the kth smallest value of a list of n distinct values. The find algorithm is quite similar to quicksort: a comparison value is chosen, the smaller values are put in a bracket to its left and the larger values in a bracket to its right, for instance
{2, 5, 4, 3}, 6, {10, 12, 16, 8}.
Suppose that r values are put in the bracket to the left. There are now three possibilities:
1. r = k − 1: the comparison value is the kth smallest, and the algorithm ends.
2. r < k − 1: the kth smallest value is the (k − r − 1)th smallest of the n − r − 1 values in the right bracket.
3. r ≥ k: search for the kth smallest of the r values in the left bracket.

We have, as in the quicksort analysis,
X = Σ_{i=1}^{n−1} Σ_{j=i+1}^n I(i, j)
and
E[I(i, j)] = P{i and j are ever compared}.
To determine the probability that i and j are ever compared, we consider three cases.

Case 1: i < j ≤ k. In this case i, j, and k will remain together until one of the values i, i+1, …, k is chosen as the comparison value, and i and j are compared if and only if that value is i or j. Hence
P{i and j are ever compared} = 2/(k − i + 1).

Case 2: i ≤ k < j. Here i, j, and k remain together until one of the values i, …, j is chosen, so
P{i and j are ever compared} = 2/(j − i + 1).

Case 3: k < i < j. Now i, j, and k remain together until one of the values k, …, j is chosen, so
P{i and j are ever compared} = 2/(j − k + 1).

It follows from the preceding that
E[X] = Σ_{j=2}^{k} Σ_{i=1}^{j−1} 2/(k − i + 1) + Σ_{j=k+1}^{n} Σ_{i=1}^{k} 2/(j − i + 1) + Σ_{j=k+2}^{n} Σ_{i=k+1}^{j−1} 2/(j − k + 1).
To approximate the preceding when n and k are large, let k = αn for 0 < α < 1. Now,
Σ_{j=2}^{k} Σ_{i=1}^{j−1} 1/(k − i + 1) = Σ_{i=1}^{k−1} Σ_{j=i+1}^{k} 1/(k − i + 1)
= Σ_{i=1}^{k−1} (k − i)/(k − i + 1)
≈ k − log(k)
≈ k = αn.

Similarly,
Σ_{j=k+1}^{n} Σ_{i=1}^{k} 1/(j − i + 1) = Σ_{j=k+1}^{n} (1/(j − k + 1) + ⋯ + 1/j)
≈ Σ_{j=k+1}^{n} (log(j) − log(j − k))
≈ ∫_k^n log(x) dx − ∫_0^{n−k} log(x) dx
= n log(n) − n − (αn log(αn) − αn) − (n − αn) log(n − αn) + (n − αn)
= n[−α log(α) − (1 − α) log(1 − α)].
As it similarly follows that
Σ_{j=k+2}^{n} Σ_{i=k+1}^{j−1} 1/(j − k + 1) = Σ_{j=k+2}^{n} (j − k − 1)/(j − k + 1) ≈ n − k = n(1 − α),
we see that
E[X] ≈ 2n[1 − α log(α) − (1 − α) log(1 − α)].
Thus, the mean number of comparisons needed by the find algorithm is a linear function of the number of values.
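
A simulation of the find algorithm against the 2n[1 − α log(α) − (1 − α) log(1 − α)] approximation (a sketch; pivots are uniform over the current bracket, and n, α, the seed, and the trial count are arbitrary):

import math
import random

def find_comparisons(a, k, rng):
    """Comparisons made by Find locating the kth smallest value of a (k >= 1)."""
    count = 0
    while len(a) > 1:
        pivot = rng.choice(a)
        left = [x for x in a if x < pivot]
        right = [x for x in a if x > pivot]
        count += len(a) - 1
        if len(left) == k - 1:           # case 1: the pivot is the answer
            break
        if len(left) >= k:               # case 3: recurse into the left bracket
            a = left
        else:                            # case 2: recurse into the right bracket
            k -= len(left) + 1
            a = right
    return count

rng = random.Random(3)
n, alpha, trials = 1000, 0.3, 300
k = int(alpha * n)
avg = sum(find_comparisons(list(range(n)), k, rng) for _ in range(trials)) / trials
theory = 2 * n * (1 - alpha * math.log(alpha) - (1 - alpha) * math.log(1 - alpha))
print(avg, theory)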

4. Markov's Inequality. Let X be a random variable that assumes only nonnegative values. Then for all a > 0,
P{X ≥ a} ≤ E[X]/a.

Proof. For a > 0, let I be the indicator variable of the event {X ≥ a}:
I = 1 if X ≥ a, 0 otherwise.
Since X is nonnegative, aI ≤ X; then
E[aI] ≤ E[X],  so  E[I] ≤ E[X]/a,  and  P{X ≥ a} = E[I] ≤ E[X]/a.
The inequality is interesting only for a > E[X], and in particular for a = nE[X], which gives P{X ≥ nE[X]} ≤ 1/n. This inequality can be applied when too little is known about a distribution. In general it is too weak to yield useful bounds, but it is fundamental for developing other useful bounds.
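
A quick empirical look at how loose the bound typically is, using the sum of ten uniform(0, 1) draws (so E[X] = 5) as the nonnegative r.v. (a sketch; the distribution, seed, and thresholds are arbitrary choices):

import random

rng = random.Random(4)
samples = [sum(rng.random() for _ in range(10)) for _ in range(100_000)]
mean = sum(samples) / len(samples)
for a in (6, 8, 9):
    freq = sum(x >= a for x in samples) / len(samples)
    print(a, freq, mean / a)   # empirical P{X >= a} vs. the Markov bound E[X]/a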

5. Variance. Let X be a random variable. The variance Var[X] of X is defined by
Var[X] = E[(X − E[X])²] = E[X²] − (E[X])².
Let X, Y be random variables. The covariance Cov[X, Y] of X and Y is defined by
Cov[X, Y] = E[(X − E[X])(Y − E[Y])].

5.1. Properties.
1. Cov[X, Y] = E[XY] − E[X]E[Y]
2. Cov[X, Y] = Cov[Y, X]
3. Cov[X, X] = Var[X]
4. Cov[cX, Y] = c Cov[X, Y]
5. Cov[X, Y + Z] = Cov[X, Y] + Cov[X, Z]
6. Cov[Σ_{i=1}^n X_i, Σ_{j=1}^m Y_j] = Σ_{i=1}^n Σ_{j=1}^m Cov(X_i, Y_j)
7. If X and Y are independent then Cov(X, Y) = 0
8. Var(Σ_{i=1}^n X_i) = Cov(Σ_{i=1}^n X_i, Σ_{j=1}^n X_j)
   = Σ_{i=1}^n Σ_{j=1}^n Cov(X_i, X_j)
   = Σ_{i=1}^n Var(X_i) + 2 Σ_{i=1}^n Σ_{j<i} Cov(X_i, X_j)
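
Property 8 with n = 2 reads Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y); a minimal numeric check on correlated samples (the distributions and the seed are arbitrary):

import random

rng = random.Random(5)
xs = [rng.gauss(0, 1) for _ in range(50_000)]
ys = [x + rng.gauss(0, 0.5) for x in xs]   # ys is correlated with xs

def cov(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

sums = [a + b for a, b in zip(xs, ys)]
print(cov(sums, sums), cov(xs, xs) + cov(ys, ys) + 2 * cov(xs, ys))
# both sides are close to 1 + 1.25 + 2·1 = 4.25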

Example 6 [7]. A coupon collecting problem. Suppose that there are m different types of coupons, and each coupon one obtains is equally likely to be any of these types. If X denotes the number of coupons one needs to collect to have at least one of each type, find the expected value of X.
Let X₁ be the number of picks required to get the first type of coupon, X₂ the additional number of picks required to get a second distinct type, X₃ the additional number of picks required to get a third distinct type, and so on. The first pick always gives a new type, so X₁ = 1. From then on, the number of picks required to get a new type is geometrically distributed with success probability p₂ = (m − 1)/m, i.e.,
P(X₂ = k) = (1 − p₂)^{k−1} p₂, for k = 1, 2, …,
and the expected time to get the second type of coupon is E[X₂] = 1/p₂ = m/(m − 1).

Inductively, once we have gotten i − 1 different types of coupons, the additional time X_i to get the ith new type is geometrically distributed with success probability
p_i = (m − (i − 1))/m,
i.e.,
P(X_i = k) = (1 − p_i)^{k−1} p_i, for k = 1, 2, …,
and the expected time to get the ith new type of coupon is
E[X_i] = 1/p_i = m/(m − i + 1).

We have that the time to get all the coupons is X = X₁ + X₂ + ⋯ + X_m; thus the expected time to get all the coupons is
E[X] = E[X₁] + E[X₂] + ⋯ + E[X_m]
= 1 + m/(m − 1) + ⋯ + m/1
= m (1/m + 1/(m − 1) + ⋯ + 1)
= m Σ_{i=1}^m 1/i
≈ m log m.
It is easy to see that the geometric random variables X_i are independent and that
Var(X_i) = (1 − p_i)/p_i².

Then
Var(X) = Σ_{i=1}^m Var(X_i) = Σ_{i=1}^m (1 − p_i)/p_i² ≤ Σ_{i=1}^m 1/p_i² = Σ_{i=1}^m m²/(m − i + 1)² = m² Σ_{i=1}^m 1/i² < m²π²/6.
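
A simulation of the collection time against m·H_m and the m log m approximation (a sketch; m, the seed, and the trial count are arbitrary):

import math
import random

def collect_all(m, rng):
    """Number of picks needed to see all m coupon types."""
    seen, picks = set(), 0
    while len(seen) < m:
        seen.add(rng.randrange(m))
        picks += 1
    return picks

rng = random.Random(6)
m, trials = 50, 2000
avg = sum(collect_all(m, rng) for _ in range(trials)) / trials
harmonic = sum(1 / i for i in range(1, m + 1))
print(avg, m * harmonic, m * math.log(m))   # estimate, m·H_m, m·log m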

Example 7 [7]. Another coupon collecting problem. Find the expected value and the variance of the number of distinct types of coupons in a collection of n coupons.
Let X be the number of types of coupons in the collection, and write X = Σ_{i=1}^m X_i, where
X_i = 1 if a coupon of type i is in the collection, 0 otherwise.
As X_i is a Bernoulli variable we have
E[X_i] = 1 − ((m − 1)/m)^n
and
E[X] = Σ_{i=1}^m E[X_i] = m (1 − ((m − 1)/m)^n).

Also,
Var(X_i) = ((m − 1)/m)^n (1 − ((m − 1)/m)^n).
For i ≠ j, X_i X_j is also Bernoulli, so
E[X_i X_j] = P(X_i X_j = 1) = P(A_i ∩ A_j),
where A_k is the event that the collection contains at least one coupon of type k. Then
P(A_i ∩ A_j) = 1 − P((A_i ∩ A_j)^c) = 1 − P(A_i^c ∪ A_j^c)
= 1 − P(A_i^c) − P(A_j^c) + P(A_i^c ∩ A_j^c)
= 1 − 2((m − 1)/m)^n + ((m − 2)/m)^n.
Therefore, for i ≠ j,
Cov[X_i, X_j] = 1 − 2((m − 1)/m)^n + ((m − 2)/m)^n − (1 − ((m − 1)/m)^n)²
= ((m − 2)/m)^n − ((m − 1)/m)^{2n}.
Hence
Var(X) = m ((m − 1)/m)^n (1 − ((m − 1)/m)^n) + m(m − 1) (((m − 2)/m)^n − ((m − 1)/m)^{2n}).
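
Both formulas are easy to check by sampling (a sketch; m, n, the seed, and the trial count are arbitrary):

import random

rng = random.Random(7)
m, n, trials = 20, 30, 20_000
samples = [len({rng.randrange(m) for _ in range(n)}) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials

a = ((m - 1) / m) ** n
b = ((m - 2) / m) ** n
print(mean, m * (1 - a))                                  # E[X]
print(var, m * a * (1 - a) + m * (m - 1) * (b - a * a))   # Var(X)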

Example 8 [7]. Runs. Let X₁, X₂, …, X_n be a sequence of independent binary random variables with P(X_i = 1) = p. A maximal consecutive subsequence of 1's is called a run. For instance, the sequence 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0 has 3 runs. Let R be the number of runs; find E[R] and Var(R).
Let, for i = 1, 2, …, n,
I_i = 1 if a run begins at position i, 0 otherwise;
then
R = Σ_{i=1}^n I_i.

Because
E[I₁] = P(X₁ = 1) = p
and
E[I_i] = P(X_{i−1} = 0, X_i = 1) = (1 − p)p, for i > 1,
it follows that
E[R] = Σ_{i=1}^n E[I_i] = p + (n − 1)p(1 − p).
To compute the variance, write
Var(R) = Σ_{i=1}^n Var(I_i) + 2 Σ_{i<j} Cov(I_i, I_j).

Because I_i is a Bernoulli random variable we have that
Var(I_i) = E[I_i](1 − E[I_i]),
and also that if j > i + 1 then I_i and I_j are independent, implying that Cov(I_i, I_j) = 0. Moreover, two runs cannot begin at consecutive positions, so I_i I_{i+1} = 0 and
Cov(I_i, I_{i+1}) = −E[I_i]E[I_{i+1}].
Hence,
Var(R) = Var(I₁) + Σ_{i=2}^n Var(I_i) + 2 Cov(I₁, I₂) + 2 Σ_{i=3}^n Cov(I_{i−1}, I_i)
= p(1 − p) + (n − 1) p(1 − p)(1 − p(1 − p)) − 2p²(1 − p) − 2(n − 2) p²(1 − p)².
Application: the number of page ranges of a word in a book index.
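
A sampling check of E[R] = p + (n − 1)p(1 − p) (a sketch; n, p, the seed, and the trial count are arbitrary):

import random

def count_runs(bits):
    """Count maximal blocks of consecutive 1's in a 0/1 list."""
    return sum(1 for i, b in enumerate(bits)
               if b == 1 and (i == 0 or bits[i - 1] == 0))

rng = random.Random(8)
n, p, trials = 100, 0.3, 20_000
avg = sum(count_runs([int(rng.random() < p) for _ in range(n)])
          for _ in range(trials)) / trials
print(avg, p + (n - 1) * p * (1 - p))   # estimate vs. E[R]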

6. Chebyshev's Inequality. For any a > 0,
P{|X − E[X]| ≥ a} ≤ Var[X]/a².

Proof.
P(|X − E[X]| ≥ a) = P((X − E[X])² ≥ a²).
Since (X − E[X])² is a nonnegative r.v., applying Markov's inequality gives
P{(X − E[X])² ≥ a²} ≤ E[(X − E[X])²]/a² = Var[X]/a².
The inequality is interesting in particular for a = n√(Var[X]), which gives P{|X − E[X]| ≥ n√(Var[X])} ≤ 1/n².
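
The same empirical setup used for Markov's inequality also illustrates Chebyshev's bound (a sketch; the sum of ten uniform(0, 1) draws has Var[X] = 10/12; the thresholds and seed are arbitrary):

import random

rng = random.Random(9)
samples = [sum(rng.random() for _ in range(10)) for _ in range(100_000)]
mu = sum(samples) / len(samples)
var = sum((x - mu) ** 2 for x in samples) / len(samples)
for a in (1.0, 1.5, 2.0):
    freq = sum(abs(x - mu) >= a for x in samples) / len(samples)
    print(a, freq, var / a ** 2)   # empirical P{|X - E[X]| >= a} vs. Var[X]/a²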

References

[1] G.R. Grimmett and D.R. Stirzaker. Probability and Random Processes. Oxford Science Publications, 1998.
[2] C.M. Grinstead and J.L. Snell. Introduction to Probability. American Mathematical Society, 1997.
[3] H.P. Hsu. Probability, Random Variables, and Random Processes. McGraw-Hill Schaum, 1996.
[4] F.P. Kelly. Probability. Notes from The Archimedeans.
[5] M. Mitzenmacher and E. Upfal. Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, 2005.
[6] R. Motwani and P. Raghavan. Randomized Algorithms. Cambridge University Press, 1995.
[7] S.M. Ross. Probability Models for Computer Science. Academic Press, 2002.
